
    How does gas cool in DM halos?

In order to study the process of cooling in dark-matter (DM) halos and assess how well simple models can represent it, we run a set of radiative SPH hydrodynamical simulations of isolated halos, with gas sitting initially in hydrostatic equilibrium within Navarro-Frenk-White (NFW) potential wells. [...] After having assessed the numerical stability of the simulations, we compare the resulting evolution of the cooled mass with the predictions of the classical cooling model of White & Frenk and of the cooling model proposed in the MORGANA code of galaxy formation. We find that the classical model predicts fractions of cooled mass which, after about two central cooling times, are about one order of magnitude smaller than those found in simulations. Although this difference decreases with time, after 8 central cooling times, when the simulations are stopped, it still amounts to a factor of 2-3. We ascribe this difference to the breakdown of the assumption that a mass shell takes one cooling time, as computed on the initial conditions, to cool to very low temperature. [...] The MORGANA model [...] agrees better with the cooled mass fraction found in the simulations, especially at early times, when the density profile of the cooling gas is shallow. With the addition of the simple assumption that the increase of the radius of the cooling region is counteracted by shrinking at the sound speed, the MORGANA model is also able to reproduce the evolution of the cooled mass fraction for all simulations to within 20-50 per cent, thereby providing a substantial improvement over the classical model. Finally, we provide a very simple fitting function which accurately reproduces the cooling flow for the first ~10 central cooling times. [Abridged]
Comment: 15 pages, accepted by MNRAS
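For orientation, the classical White & Frenk bookkeeping referenced above can be stated compactly. The notation below is a standard textbook form chosen for this sketch, not quoted from the paper: each shell is assigned the cooling time computed on the initial profile, and at time t the cooled mass is all gas inside the radius whose initial cooling time equals t.

```latex
% Classical cooling prescription (standard form; notation chosen for this sketch):
% a shell cools after one local cooling time evaluated on the initial conditions.
t_{\mathrm{cool}}(r) = \frac{3}{2}\,
  \frac{n(r)\,k_B\,T(r)}{n_e(r)\,n_i(r)\,\Lambda(T)},
\qquad
t_{\mathrm{cool}}\!\big(r_{\mathrm{cool}}(t)\big) = t,
\qquad
M_{\mathrm{cooled}}(t) = M_{\mathrm{gas}}\big({<}\,r_{\mathrm{cool}}(t)\big)
```

The factor-of-several undershoot reported above is then naturally read as shells cooling faster than this initial-condition clock allows.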

    Multi-Phase epidemic model by a Markov chain

In this paper we propose a continuous-time Markov chain to describe the spread of an infective, non-mortal disease in a community of limited size that is also exposed to an external source of infection. Numerical simulations show tendencies toward recurring epidemic outbreaks and toward fade-out or extinction of the infection.
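As a minimal sketch of this kind of model (not the paper's actual chain), the following Gillespie-style simulation of an SIS-type continuous-time Markov chain with an external infection term can reproduce the qualitative behaviors mentioned, recurrent outbreaks or fade-out, depending on the rates. All parameter names and values are illustrative assumptions.

```python
import random

def gillespie_sis(N=100, I0=1, beta=0.3, gamma=0.1, eps=0.01, t_max=500.0):
    """Toy SIS-type CTMC with external infection rate eps (all values illustrative)."""
    t, I = 0.0, I0
    history = [(t, I)]
    while t < t_max:
        S = N - I
        rate_inf = beta * S * I / N + eps * S  # internal + external infections
        rate_rec = gamma * I                   # recoveries (back to susceptible)
        total = rate_inf + rate_rec
        if total == 0.0:                       # absorbing only if eps == 0 and I == 0
            break
        t += random.expovariate(total)         # exponential waiting time to next event
        if random.random() < rate_inf / total:
            I += 1                             # infection event
        else:
            I -= 1                             # recovery event
        history.append((t, I))
    return history

outbreaks = gillespie_sis()
print(max(i for _, i in outbreaks))            # peak prevalence in this run
```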

    Impact of Processing Costs on Service Chain Placement in Network Functions Virtualization

The Network Functions Virtualization (NFV) paradigm is the most promising technique to help network providers reduce capital and energy costs. The deployment of virtual network functions (VNFs) running on generic x86 hardware allows higher flexibility than the classical middlebox approach. NFV also reduces the complexity of deploying network services through the concept of service chaining, which defines how multiple VNFs can be chained together to provide a specific service. As a drawback, hosting multiple VNFs on the same hardware can lead to scalability issues, especially in the sharing of processing resources. In this paper, we evaluate the impact of two types of costs that must be taken into account when multiple chained VNFs share the same processing resources: upscaling costs and context-switching costs. Upscaling costs are incurred by multi-core VNF implementations, which suffer a penalty due to the need for load balancing among cores. Context-switching costs arise when multiple VNFs share the same CPU and thus require the loading and saving of their contexts. We model the evaluation of these costs as an ILP problem and show their impact in a VNF consolidation scenario, in which the x86 hardware deployed in the network is minimized.
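As a hedged illustration of how such costs can enter a consolidation ILP (using the open-source PuLP modeler; the instance, capacity figures, and the linear context-switch proxy below are invented for this sketch, not the paper's formulation):

```python
import pulp

vnfs = ["fw", "nat", "dpi"]
servers = ["s1", "s2"]
cpu = {"fw": 2, "nat": 1, "dpi": 3}   # per-VNF CPU demand (illustrative)
cap = 4                               # CPU capacity per server (illustrative)
switch_cost = 0.5                     # penalty per co-located VNF beyond the first

prob = pulp.LpProblem("vnf_consolidation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (vnfs, servers), cat="Binary")  # VNF v placed on s
y = pulp.LpVariable.dicts("y", servers, cat="Binary")          # server s switched on
e = pulp.LpVariable.dicts("extra", servers, lowBound=0)        # context-switch proxy

for v in vnfs:                        # every VNF placed exactly once
    prob += pulp.lpSum(x[v][s] for s in servers) == 1
for s in servers:
    # capacity coupled to server activation
    prob += pulp.lpSum(cpu[v] * x[v][s] for v in vnfs) <= cap * y[s]
    # one unit of switching overhead per VNF beyond the first on a server
    prob += e[s] >= pulp.lpSum(x[v][s] for v in vnfs) - 1

# minimize servers used plus the (toy) context-switching penalty
prob += pulp.lpSum(y[s] for s in servers) + switch_cost * pulp.lpSum(e[s] for s in servers)
prob.solve()
print({v: next(s for s in servers if x[v][s].value() == 1) for v in vnfs})
```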

    Disaster-Resilient Control Plane Design and Mapping in Software-Defined Networks

Communication networks, such as core optical networks, heavily depend on their physical infrastructure, and hence they are vulnerable to man-made disasters, such as Electromagnetic Pulse (EMP) or Weapons of Mass Destruction (WMD) attacks, as well as to natural disasters. Large-scale disasters may cause huge data loss and connectivity disruption in these networks. As our dependence on network services increases, the need for novel survivability methods to mitigate the effects of disasters on communication networks becomes a major concern. Software-Defined Networking (SDN), by centralizing control logic and separating it from physical equipment, facilitates network programmability and opens up new ways to design disaster-resilient networks. On the other hand, to fully exploit the potential of SDN, along with data-plane survivability, we also need to design the control plane to be resilient enough to survive network failures caused by disasters. Several distributed SDN controller architectures have been proposed to mitigate the risks of overload and failure, but they are optimized for limited faults and do not address large-scale disaster failures. For disaster resiliency of the control plane, we propose to design it as a virtual network, a problem which can be solved using Virtual Network Mapping techniques. We select an appropriate mapping of the controllers over the physical network such that connectivity among the controllers (controller-to-controller) and between switches and controllers (switch-to-controller) is not compromised by physical infrastructure failures caused by disasters. We formally model this disaster-aware control-plane design and mapping problem, and demonstrate a significant reduction in the disruption of controller-to-controller and switch-to-controller communication channels using our approach.
Comment: 6 pages
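A minimal sketch of the feasibility check behind such a mapping (using networkx; the four-node topology, disaster zones, and controller placement are invented for illustration): a candidate placement survives a disaster zone if, after removing the zone's nodes, the surviving controllers remain mutually reachable and every surviving switch still reaches some controller.

```python
import itertools
import networkx as nx

G = nx.Graph([("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("b", "d")])
zones = [{"a"}, {"b", "c"}]          # hypothetical disaster zones
controllers = {"a", "d"}             # candidate controller placement

def survives(G, controllers, zone):
    H = G.copy()
    H.remove_nodes_from(zone)
    alive = controllers - zone
    if not alive:                    # all controllers wiped out
        return False
    # controller-to-controller connectivity among survivors
    if any(not nx.has_path(H, u, v) for u, v in itertools.combinations(alive, 2)):
        return False
    # switch-to-controller connectivity for every surviving node
    return all(any(nx.has_path(H, s, c) for c in alive) for s in H.nodes)

print(all(survives(G, controllers, z) for z in zones))   # True for this placement
```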

    A disaster-resilient multi-content optical datacenter network architecture

Cloud services based on datacenter networks are becoming increasingly important. Optical networks are well suited to meet the demands set by the high volume of traffic between datacenters, given their high bandwidth and low-latency characteristics. In such networks, path protection against network failures is generally ensured by providing a backup path to the same destination that is link-disjoint from the primary path. This approach fails against disasters covering an area that disrupts both primary and backup resources. Content/service protection is also a fundamental problem in datacenter networks, as the failure of a single datacenter should not cause a specific content/service to disappear from the network. Content placement, routing, and protection of paths and content are closely related to one another, so their interactions should be studied together. In this work, we propose an integrated ILP formulation to design an optical datacenter network that solves all the above-mentioned problems simultaneously. We show that our disaster protection scheme, which exploits anycasting, provides more protection, yet uses less capacity, than dedicated single-link protection. We also show that a reasonable number of datacenters and selective content replicas, combined with intelligent network design, can provide survivability against disasters while supporting user demands.
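A toy version of the anycast idea (networkx again; the six-node topology, replica sites, and zone map are invented): the primary path goes to the nearest replica, and the backup goes to a different replica while avoiding every disaster zone the primary traverses.

```python
import networkx as nx

G = nx.Graph([(1, 2), (2, 3), (3, 4), (1, 5), (5, 6), (6, 4)])
replicas = [3, 6]                                   # datacenters holding the content
zone = {1: "Z1", 2: "Z1", 3: "Z2", 4: "Z2", 5: "Z3", 6: "Z3"}

def protect(G, src, replicas, zone):
    # primary: shortest path to the closest replica (anycast)
    primary = min((nx.shortest_path(G, src, d) for d in replicas), key=len)
    risky = {zone[n] for n in primary if n != src}  # zones the primary touches
    # backup: different replica, routed over nodes outside the primary's zones
    H = G.subgraph([n for n in G if n == src or zone[n] not in risky])
    for d in replicas:
        if d != primary[-1] and d in H and nx.has_path(H, src, d):
            return primary, nx.shortest_path(H, src, d)
    return primary, None                            # no disaster-disjoint backup

print(protect(G, 1, replicas, zone))  # ([1, 2, 3], [1, 5, 6]) for this toy instance
```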

    How to Integrate Machine-Learning Probabilistic Output in Integer Linear Programming: a case for RSA

We integrate machine-learning-based QoT estimation into the reach constraints of an integer linear program (ILP) for routing and spectrum assignment (RSA), and develop an iterative solution for QoT-aware RSA. Results show spectrum savings above 30% compared to solving the RSA ILP with traditional margined reach computation.
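One hedged way to picture the integration: train any regressor on (synthetic) QoT data, scan it for the longest path length whose predicted SNR clears each modulation format's threshold, and hand those lengths to the ILP as reach constants. Everything below, the data, thresholds, and span length, is invented for the sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
lengths = rng.uniform(100, 3000, 200)                     # path lengths (km)
X = np.column_stack([lengths, lengths / 80])              # [length_km, n_spans]
y = 30 - 5 * np.log10(lengths) + rng.normal(0, 0.3, 200)  # synthetic SNR (dB)
qot = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

def ml_reach(snr_threshold, km_per_span=80):
    """Longest length whose predicted SNR clears the threshold (coarse scan)."""
    for L in range(3000, 100, -50):
        if qot.predict([[L, L / km_per_span]])[0] >= snr_threshold:
            return L
    return 0

# per-format reach values that would replace margined analytical reach in the ILP
print({fmt: ml_reach(thr) for fmt, thr in {"16QAM": 18.0, "QPSK": 12.0}.items()})
```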

    Energy-Efficient VoD content delivery and replication in integrated metro/access networks

Today's growth in the demand for access bandwidth is driven by the success of the bandwidth-hungry Video-on-Demand (VoD) service. At the current pace at which network operators increase end users' access bandwidth, and with the current network infrastructure, a large amount of video traffic is expected to flood the core/metro segments of the network in the near future, with the consequent risk of congestion and network disruption. A growing body of research studies the migration of content towards the users. Further, the current trend towards the integration of the metro and access segments of the network makes it possible to deploy Metro Servers (MSes) that serve video content directly from the integrated metro/access segment, keeping VoD traffic as local as possible. This paper investigates a potential risk of this solution: an increase in overall network energy consumption. First, we identify a detailed power model for network equipment and MSes, accounting for fixed and load-proportional contributions. Then, we define a novel strategy for deciding when to switch MSes and network interfaces on and off, so as to strike a balance between the energy consumed transporting content through the network and the energy consumed for processing and storage in the MSes. By means of simulations, and taking into account real-world values for equipment power consumption, we show that our strategy is effective in providing the least energy consumption for any given traffic load.
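A back-of-the-envelope sketch of the fixed-plus-load-proportional power model and the on/off decision it drives (all wattage figures below are placeholders, not the paper's measured values): serving locally pays the Metro Server's fixed power, while hauling the traffic pays a load-proportional cost per traversed hop.

```python
def power(p_fixed, p_per_gbps, load_gbps, on=True):
    """Fixed + load-proportional power draw (W) of a device that can be off."""
    return (p_fixed + p_per_gbps * load_gbps) if on else 0.0

def serve_locally(load_gbps, hops_to_core=4):
    # Metro Server: large fixed cost, small per-Gb/s cost (illustrative numbers)
    local = power(p_fixed=300.0, p_per_gbps=10.0, load_gbps=load_gbps)
    # transport: per-hop load-proportional cost; fixed share assumed already paid
    transport = hops_to_core * power(p_fixed=0.0, p_per_gbps=25.0, load_gbps=load_gbps)
    return local < transport

print(serve_locally(1.0), serve_locally(10.0))  # low load: haul it; high load: MS on
```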

    Dual-Stage Planning for Elastic Optical Networks Integrating Machine-Learning-Assisted QoT Estimation

Following the emergence of Elastic Optical Networks (EONs), Machine Learning (ML) has been intensively investigated as a promising methodology to address complex network management tasks, including Quality of Transmission (QoT) estimation, fault management, and automatic adjustment of transmission parameters. Though several ML-based solutions for specific tasks have been proposed, how to integrate the outcomes of such ML approaches into Routing and Spectrum Assignment (RSA) models (which address the fundamental planning problem in EONs) is still an open research problem. In this study, we propose a dual-stage iterative RSA optimization framework that incorporates the QoT estimations provided by an ML regressor, used to define lightpaths' reach constraints, into a Mixed Integer Linear Programming (MILP) formulation. The first stage minimizes the overall spectrum occupation, whereas the second stage maximizes the minimum spacing between neighboring channels without increasing the spectrum occupation obtained in the first stage. During the second stage, additional interference constraints are generated; these constraints are added to the MILP at the next iteration round to exclude lightpath combinations that would exhibit unacceptable QoT. Our illustrative numerical results on realistic EON instances show that the proposed ML-assisted framework achieves spectrum occupation savings of up to 52.4% (around 33% on average) compared to a traditional MILP-based RSA framework that uses conservative reach constraints based on margined analytical models.
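The iteration described above reduces to a compact loop; the sketch below is structural only, with solve_milp (the two-stage MILP) and qot_ok (the ML regressor's verdict on one lightpath) as assumed placeholder callables.

```python
def iterative_rsa(demands, solve_milp, qot_ok, max_iters=20):
    """Alternate MILP solves with ML-based QoT checks until all lightpaths pass."""
    forbidden = set()                              # lightpath combos excluded so far
    for _ in range(max_iters):
        solution = solve_milp(demands, forbidden)  # stage 1 + stage 2 MILP
        bad = frozenset(lp for lp in solution if not qot_ok(lp))
        if not bad:
            return solution                        # every lightpath meets its QoT
        forbidden.add(bad)                         # new interference constraint
    raise RuntimeError("no QoT-feasible RSA within the iteration budget")
```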

    Successful Treatment of an MTBE-impacted Aquifer Using a Bioreactor Self-colonized by Native Aquifer Bacteria

A field-scale fixed-bed bioreactor was used to successfully treat an MTBE-contaminated aquifer in North Hollywood, CA, without requiring inoculation with introduced bacteria. Native bacteria from the MTBE-impacted aquifer, entering the bioreactor in the contaminated groundwater pumped from the site, rapidly colonized the bioreactor and biodegraded MTBE with greater than 99% removal efficiency. DNA sequencing of the 16S rRNA gene identified the MTBE-degrading bacterium Methylibium petroleiphilum in the bioreactor. Quantitative PCR showed M. petroleiphilum enriched in the bioreactor by three orders of magnitude above the densities pre-existing in the groundwater. Because treatment was carried out by indigenous rather than introduced organisms, regulatory approval was obtained for implementation of a full-scale bioreactor to continue treatment of the aquifer. In addition, after confirmation that MTBE in the bioreactor was removed to below the maximum contaminant level (MCL; 5 μg L−1 for MTBE), treated water was approved for reinjection back into the aquifer rather than requiring discharge to a water treatment system. This is the first treatment system in California to be approved for reinjection of biologically treated effluent into a drinking-water aquifer. This study demonstrated the potential of using native microbial communities already present in an aquifer as the inoculum for ex-situ bioreactors, circumventing the need to establish non-native, non-acclimated, and potentially costly inoculants. Understanding and harnessing the metabolic potential of native organisms avoids some of the issues associated with introducing non-native organisms into drinking-water aquifers, and can provide a low-cost, efficient remediation technology that may streamline future bioremediation approval processes.